Linear Scaling Calculations of Excitation Energies with Active-Space Particle-Particle Random Phase Approximation
We developed an efficient active-space particle-particle random phase
approximation (ppRPA) approach for calculating accurate charge-neutral
excitation energies of molecular systems. The active-space ppRPA approach
truncates both orbital indices of the pairs in the particle-particle and
hole-hole blocks of the ppRPA matrix, retaining only the frontier orbitals
that dominate the low-lying excitations. The resulting matrix, whose
eigenvalues are the excitation energies, has a dimension independent of
system size. The computational effort of the excitation energy calculation
therefore scales linearly with system size and is negligible compared with the
ground-state calculation of the (N-2)-electron system, where N is the electron
number of the molecule. With an active space of 30 occupied and 30 virtual
orbitals, the active-space ppRPA approach predicts the energies of valence,
charge-transfer, Rydberg, double, and diradical excitations with mean absolute
errors (MAEs) smaller than 0.03 eV relative to the full-space
ppRPA results. As a byproduct, we also applied the active-space ppRPA
approach within the renormalized singles (RS) T-matrix approach. Combined
with the non-interacting pair approximation, which approximates the
contributions to the self-energy from outside the active space, the
active-space RS T-matrix approach with a PBE starting point predicts accurate
absolute and relative core-level binding energies, with MAEs of about 1.58 eV
and 0.3 eV, respectively. The developed linear-scaling calculation of
excitation energies is promising for applications to large and complex systems.
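
The core of the method reduces to assembling the ppRPA matrix over pair
indices restricted to an active orbital window and diagonalizing it with the
ppRPA metric. The sketch below is a minimal dense-matrix illustration of this
construction, assuming hypothetical inputs `eps` (spin-orbital energies),
`eri` (antisymmetrized integrals, `eri[p,q,r,s] = <pq||rs>`), and `nocc` from
a converged (N-2)-electron reference; a production linear-scaling code would
avoid storing full integrals and exploit spin symmetry.

```python
import numpy as np
from itertools import combinations

def active_space_pprpa(eps, eri, nocc, n_act_occ=30, n_act_vir=30):
    """Sketch: neutral excitation energies from active-space ppRPA.

    Assumed (hypothetical) inputs from a converged (N-2)-electron SCF:
      eps : (nmo,) spin-orbital energies
      eri : antisymmetrized two-electron integrals, eri[p,q,r,s] = <pq||rs>
      nocc: number of occupied spin orbitals
    """
    nmo = len(eps)
    # Active window: highest occupied and lowest virtual frontier orbitals.
    occ = range(max(0, nocc - n_act_occ), nocc)
    vir = range(nocc, min(nmo, nocc + n_act_vir))
    pp = list(combinations(vir, 2))  # particle-particle pairs, a < b
    hh = list(combinations(occ, 2))  # hole-hole pairs, i < j

    # A (pp-pp), C (hh-hh), and B (pp-hh) blocks of the ppRPA matrix.
    A = np.array([[eri[a, b, c, d] for (c, d) in pp] for (a, b) in pp])
    A += np.diag([eps[a] + eps[b] for (a, b) in pp])
    C = np.array([[eri[i, j, k, l] for (k, l) in hh] for (i, j) in hh])
    C -= np.diag([eps[i] + eps[j] for (i, j) in hh])
    B = np.array([[eri[a, b, i, j] for (i, j) in hh] for (a, b) in pp])

    # Generalized eigenproblem M v = omega W v with metric W = diag(I, -I).
    M = np.block([[A, B], [B.T, C]])
    W = np.diag([1.0] * len(pp) + [-1.0] * len(hh))
    omega, vecs = np.linalg.eig(W @ M)  # W is its own inverse

    # Positive-metric-norm eigenvalues are two-electron addition energies;
    # neutral excitations are differences from the lowest addition energy.
    norms = np.einsum("ik,ij,jk->k", vecs.conj(), W, vecs).real
    add = np.sort(omega.real[norms > 0])
    return add[1:] - add[0]
```

Because the pair spaces are built only from the fixed frontier window (here
30 occupied and 30 virtual orbitals), the matrix dimension, and hence the
diagonalization cost, stays constant as the molecule grows; only the
ground-state (N-2)-electron calculation scales with system size.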
Variance-Covariance Regularization Improves Representation Learning
Transfer learning has emerged as a key approach in the machine learning
domain, enabling the application of knowledge derived from one domain to
improve performance on subsequent tasks. Given the often limited information
about these subsequent tasks, a strong transfer learning approach calls for the
model to capture a diverse range of features during the initial pretraining
stage. However, recent research suggests that, without sufficient
regularization, the network tends to concentrate on features that primarily
reduce the pretraining loss function. This tendency can result in inadequate
feature learning and impaired generalization capability for target tasks. To
address this issue, we propose Variance-Covariance Regularization (VCR), a
regularization technique aimed at fostering diversity in the learned network
features. Drawing inspiration from recent advances in self-supervised
learning, VCR promotes learned representations that exhibit
high variance and minimal covariance, thus preventing the network from focusing
solely on loss-reducing features.
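
Concretely, a penalty of this kind can be written in a few lines. The sketch
below is a minimal VICReg-style illustration of the idea, a hinge that keeps
each feature dimension's standard deviation above a target plus a penalty on
off-diagonal covariance; the function name, defaults, and weighting are
hypothetical, not the paper's exact formulation.

```python
import torch

def vc_penalty(z, std_target=1.0, eps=1e-4):
    """Sketch of a variance-covariance penalty on a batch of representations.

    z: (batch, dim) features. Names and default values are illustrative.
    """
    z = z - z.mean(dim=0)  # center each feature over the batch
    n, d = z.shape

    # Variance term: hinge keeps every dimension's std above std_target,
    # discouraging collapsed (low-variance) features.
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = torch.relu(std_target - std).mean()

    # Covariance term: penalize off-diagonal covariance so that different
    # dimensions carry non-redundant information.
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = off_diag.pow(2).sum() / d

    return var_loss, cov_loss

# Hypothetical usage during pretraining: add the penalty to the task loss,
#   loss = task_loss + lambda_v * var_loss + lambda_c * cov_loss
```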
We empirically validate the efficacy of our method through comprehensive
experiments coupled with in-depth analytical studies on the learned
representations. In addition, we develop an efficient implementation strategy
that incurs minimal computational overhead. Our results indicate that VCR is
a powerful and efficient method for enhancing transfer learning performance
in both supervised and self-supervised learning, opening new possibilities
for future research in this domain.